We present Pre-trained Machine Reader (PMR), a novel method to retrofit Pre-trained Language Models (PLMs) into Machine Reading Comprehension (MRC) models without acquiring labeled data. PMR resolves the discrepancy between model pre-training and downstream fine-tuning of existing PLMs, and provides a unified solver for various extraction tasks. To achieve this, we construct a large volume of general-purpose and high-quality MRC-style training data with the help of Wikipedia hyperlinks and design a Wiki Anchor Extraction task to guide the MRC-style pre-training. Although conceptually simple, PMR is particularly effective on extraction tasks such as Extractive Question Answering and Named Entity Recognition, where it shows substantial improvements over previous approaches, especially under low-resource settings. Moreover, viewing sequence classification as a special case of extraction in our MRC formulation, PMR can even extract high-quality rationales to explain its classification decisions, providing greater explainability of the predictions.
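As a concrete illustration of the unified MRC formulation, here is a minimal sketch (the query templates and examples below are our own assumptions, not the authors' data) showing how both Extractive QA and NER reduce to span extraction over a (query, context) pair:

```python
# A minimal sketch of casting both Extractive QA and NER as MRC-style
# span extraction, in the spirit of PMR's unified formulation. The
# query templates and example data are illustrative assumptions.

def as_mrc_example(query: str, context: str) -> dict:
    """Pack a task instance into the (query, context) format consumed by
    an extractive MRC model, which predicts answer spans in context."""
    return {"query": query, "context": context}

# Extractive QA: the question itself is the query.
eqa = as_mrc_example(
    query="Who founded Wikipedia?",
    context="Wikipedia was founded by Jimmy Wales and Larry Sanger in 2001.",
)

# NER: a label definition becomes the query, and every mention of that
# entity type in the context is an answer span.
ner = as_mrc_example(
    query='"Person": a human being, identified by name.',
    context="Wikipedia was founded by Jimmy Wales and Larry Sanger in 2001.",
)

# Sequence classification degenerates to extracting the whole input
# (or nothing), which is how the MRC view also covers classification
# and yields extracted rationales.
print(eqa, ner, sep="\n")
```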
The Sketched Wasserstein Distance ($W^S$) is a new probability distance specifically tailored to finite mixture distributions. Given any metric $d$ defined on a set $\mathcal{A}$ of probability distributions, $W^S$ is defined as the most discriminative convex extension of this metric to the space $\mathcal{S} = \mathrm{conv}(\mathcal{A})$ of mixtures of elements of $\mathcal{A}$. Our representation theorem shows that the space $(\mathcal{S}, W^S)$ constructed in this way is isomorphic to a Wasserstein space over $\mathcal{X} = (\mathcal{A}, d)$. This result establishes a universality property for Wasserstein distances, showing that they are uniquely characterized by their discriminative power for finite mixtures. We exploit this representation theorem to propose an estimation methodology based on Kantorovich--Rubinstein duality, and prove a general theorem showing that the resulting estimation error can be bounded by the sum of the errors of estimating the mixture weights and the mixture components, for any estimators of these quantities. In the case of $p$-dimensional discrete $K$-mixtures, we derive sharp statistical properties of the estimated $W^S$, which we show can be estimated at a rate proportional to $\sqrt{K/n}$, up to logarithmic factors. We complement these bounds with a minimax lower bound on the risk of estimating the Wasserstein distance between distributions on a $K$-point metric space, which matches our upper bound up to logarithmic factors. This result is the first nearly tight minimax lower bound for estimating the Wasserstein distance between discrete distributions. Furthermore, we construct $\sqrt{n}$-asymptotically normal estimators of the mixture weights, and derive a $\sqrt{n}$ distributional limit of our estimator of $W^S$ as a consequence. Simulation studies and a data analysis provide strong support for the applicability of the new Sketched Wasserstein Distance.
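In symbols, our reading of the representation theorem is the following (the notation is ours, and identifiability of the mixture representation is assumed): for mixtures $\mu = \sum_i \alpha_i a_i$ and $\nu = \sum_j \beta_j a_j$ with components $a_i \in \mathcal{A}$ and weights $\alpha, \beta$ on the simplex,

$$W^S(\mu, \nu) \;=\; \min_{\pi \in \Pi(\alpha, \beta)} \sum_{i,j} \pi_{ij}\, d(a_i, a_j),$$

i.e., the optimal-transport cost between the mixing measures, with ground cost $d$ on the component set $\mathcal{A}$.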
As an important fine-grained sentiment analysis problem, aspect-based sentiment analysis (ABSA), which aims to analyze and understand people's opinions at the aspect level, has attracted considerable interest over the last decade. To handle ABSA in different scenarios, various tasks have been introduced for analyzing different sentiment elements and their relations, including the aspect term, aspect category, opinion term, and sentiment polarity. Unlike early ABSA work focusing on a single sentiment element, many compound ABSA tasks involving multiple elements have been studied in recent years to capture more complete aspect-level sentiment information. However, a systematic review of the various ABSA tasks and their corresponding solutions is still lacking, a gap we aim to fill in this survey. More specifically, we provide a new taxonomy for ABSA that organizes existing studies along the axes of the sentiment elements concerned, with an emphasis on recent advances in compound ABSA tasks. From the perspective of solutions, we summarize the utilization of pre-trained language models for ABSA, which has advanced its performance to a new level. Besides, techniques for building more practical ABSA systems in cross-domain/lingual scenarios are discussed. Finally, we review some emerging topics and discuss open challenges to outline potential future directions of ABSA.
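To make the four sentiment elements concrete, here is an illustrative example of ours (the sentence and labels are not drawn from the survey):

```python
# A worked example of the four ABSA sentiment elements for one review
# sentence. The sentence and labels are illustrative assumptions.
sentence = "The pizza was delicious, but the service was painfully slow."

absa_annotations = [
    {
        "aspect_term": "pizza",
        "aspect_category": "food quality",
        "opinion_term": "delicious",
        "sentiment_polarity": "positive",
    },
    {
        "aspect_term": "service",
        "aspect_category": "service general",
        "opinion_term": "painfully slow",
        "sentiment_polarity": "negative",
    },
]

# Compound ABSA tasks predict tuples of these elements jointly, e.g.
# (aspect term, opinion term, polarity) triplets:
triplets = [(a["aspect_term"], a["opinion_term"], a["sentiment_polarity"])
            for a in absa_annotations]
print(triplets)
```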
Recently, table structure recognition has made impressive progress with the help of deep graph models. Most of them exploit single visual cues of tabular elements, or simply combine visual cues with other modalities via early fusion, to reason about their graphical relationships. However, neither early fusion nor reasoning over a single modality alone is suited to table structures of great diversity; instead, different modalities are expected to collaborate with one another in distinct patterns for different table cases. In the community, the importance of intra- and inter-modality interactions for table structure reasoning remains under-explored. In this paper, we define this as the heterogeneous table structure recognition (Hetero-TSR) problem. Aiming to fill this gap, we propose Neural Collaborative Graph Machines (NCGM), a novel architecture equipped with stacked collaborative blocks that alternately extract intra-modality context and model inter-modality interactions in a hierarchical manner. It represents the intra- and inter-modality relationships of tabular elements more robustly, which significantly improves recognition performance. We also show that the proposed NCGM can modulate the collaborative patterns of different modalities conditioned on the context of intra-modality cues, which is essential for diversified table cases. Experimental results on benchmarks demonstrate that our proposed NCGM achieves state-of-the-art performance and beats other contemporary methods by a large margin, especially under challenging scenarios.
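A hedged sketch of what one such collaborative block might look like is given below; the layer names, dimensions, and exact wiring are our assumptions, not NCGM's published architecture:

```python
# A hedged PyTorch sketch of one "collaborative block": intra-modality
# context extraction followed by inter-modality interaction. Layer
# names, sizes, and the exact wiring are our assumptions, not NCGM's.
import torch
import torch.nn as nn

class CollaborativeBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # One self-attention per modality models intra-modality context.
        self.intra_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra_geo = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Cross-attention lets one modality attend to the other.
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis: torch.Tensor, geo: torch.Tensor):
        # vis, geo: (batch, num_table_elements, dim) features of two
        # modalities (e.g., visual and geometric cues of table cells).
        vis = vis + self.intra_vis(vis, vis, vis)[0]
        geo = geo + self.intra_geo(geo, geo, geo)[0]
        # Inter-modality interaction: visual queries attend to geometry.
        fused = vis + self.inter(vis, geo, geo)[0]
        return fused, geo

block = CollaborativeBlock()
v = torch.randn(2, 30, 256)  # 30 table elements per example
g = torch.randn(2, 30, 256)
fused, _ = block(v, g)
print(fused.shape)  # torch.Size([2, 30, 256])
```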
Knowledge-enriched language representation learning has shown promising performance on various knowledge-intensive NLP tasks. However, existing knowledge-enhanced language models are all trained on monolingual knowledge graph data, which limits their application to more languages. In this work, we present a novel framework to pre-train knowledge-based multilingual language models (KMLMs). We first generate a large volume of code-switched synthetic sentences and reasoning-based multilingual training data using the Wikidata knowledge graph. Then, based on the intra- and inter-sentence structures of the generated data, we design pre-training tasks to facilitate knowledge learning, allowing the language models not only to memorize factual knowledge but also to learn useful logical patterns. Our pre-trained KMLMs demonstrate significant performance improvements on a wide range of knowledge-intensive cross-lingual tasks, including named entity recognition, factual knowledge retrieval, relation classification, and a newly designed logical reasoning task. Our code and pre-trained language models will be made publicly available.
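A minimal sketch of the code-switching idea, generating one synthetic sentence from a Wikidata-style triple (the template, triple, and lexicon below are toy assumptions, not the paper's pipeline):

```python
# A minimal sketch of generating a code-switched synthetic sentence
# from a Wikidata-style triple. The template, triple, and translation
# table are illustrative assumptions, not the paper's actual pipeline.
import random

triple = ("Marie Curie", "country of citizenship", "Poland")

# Toy bilingual lexicon standing in for Wikidata multilingual labels.
translations = {
    "Marie Curie": {"de": "Marie Curie"},
    "Poland": {"de": "Polen"},
}

def code_switch(triple, lang="de", p=0.5):
    subj, rel, obj = triple
    # Randomly swap entity surface forms to the target language.
    if random.random() < p:
        subj = translations.get(subj, {}).get(lang, subj)
    if random.random() < p:
        obj = translations.get(obj, {}).get(lang, obj)
    return f"{subj} has {rel} {obj}."

random.seed(0)
print(code_switch(triple))  # e.g. "Marie Curie has country of citizenship Polen."
```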
This paper studies the estimation of high-dimensional, discrete, possibly sparse mixture models for topic models. The data consist of multinomial counts of $p$ words observed in $n$ independent documents. In topic models, the $p \times n$ expected word-frequency matrix is assumed to factorize as a $p \times K$ word-topic matrix $A$ times a $K \times n$ topic-document matrix $T$. Since the columns of both matrices represent conditional probabilities lying in probability simplices, the columns of $A$ are viewed as $p$-dimensional mixture components common to all documents, while the columns of $T$ are viewed as $K$-dimensional mixture weights that are document-specific and allowed to be sparse. The main interest is to provide sharp, finite-sample, $\ell_1$-norm convergence rates for estimators of the mixture weights $T$ when $A$ is either known or unknown. For known $A$, we propose the MLE of $T$. Our non-standard analysis of the MLE not only establishes its $\ell_1$ convergence rate but also reveals a remarkable property: the MLE, with no extra regularization, can be exactly sparse and contain the true zero pattern of $T$. We further show that the MLE is both minimax-optimal and adaptive to the unknown sparsity within a large class of sparse topic distributions. When $A$ is unknown, we estimate $T$ by optimizing the likelihood function with a plug-in estimator $\hat{A}$ of $A$. For any estimator $\hat{A}$ satisfying carefully detailed conditions relative to $A$, the resulting estimator of $T$ is shown to retain the properties established for the MLE. The ambient dimensions $K$ and $p$ are allowed to grow with the sample size. Our application is the estimation of 1-Wasserstein distances between document-generating distributions; we propose, estimate, and analyze new 1-Wasserstein distances between two probabilistic document representations.
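For known $A$, the MLE of a single document's topic weights maximizes a concave log-likelihood over the simplex, so a standard EM (multiplicative) update converges to it. A hedged numpy sketch follows (the variable names and the simulation are ours):

```python
# A hedged sketch of the MLE for one document's topic weights t, given
# a known word-topic matrix A, via the classical EM (multiplicative)
# update for a multinomial mixture. Names and parameters are ours.
import numpy as np

def mle_topic_weights(x, A, iters=500):
    """x: length-p word counts; A: (p, K) column-stochastic matrix.
    Returns t on the K-simplex maximizing sum_j x_j log (A t)_j."""
    p, K = A.shape
    n = x.sum()
    t = np.full(K, 1.0 / K)
    for _ in range(iters):
        denom = A @ t                      # (p,) mixture word probabilities
        # EM update; zero counts contribute nothing, t stays on the simplex.
        t = t * (A.T @ (x / np.maximum(denom, 1e-12))) / n
    return t

rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(50), size=5).T       # p=50 words, K=5 topics
t_true = np.array([0.6, 0.4, 0.0, 0.0, 0.0])   # sparse true weights
x = rng.multinomial(2000, A @ t_true)
t_hat = mle_topic_weights(x, A)
print(np.round(t_hat, 3))  # note the (near-)zero entries for the true zeros
```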
The problem of finding a unique low-dimensional decomposition of a given matrix is fundamental and recurs in many fields. In this paper, we study the problem of seeking a unique decomposition of a low-rank matrix $Y \in \mathbb{R}^{p \times n}$. Specifically, we consider $Y = AX \in \mathbb{R}^{p \times n}$, where the matrix $A \in \mathbb{R}^{p \times r}$ has full column rank with $r < \min\{n, p\}$, and the matrix $X \in \mathbb{R}^{r \times n}$ is element-wise sparse. We prove that this sparse decomposition of $Y$ can be uniquely identified, up to some intrinsic signed permutation. Our approach relies on solving a non-convex optimization problem constrained over the unit sphere. Our geometric analysis of the non-convex optimization landscape shows that any strict local solution is close to the ground-truth solution, and can be recovered by a simple data-driven initialization followed by any second-order descent algorithm. Finally, we corroborate these theoretical results with numerical experiments.
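The abstract does not spell out the non-convex objective; as a purely illustrative stand-in, the sketch below runs projected (Riemannian) gradient ascent over the unit sphere using the common $\ell_4$-norm surrogate from related sparse-recovery work:

```python
# An illustrative sketch of first-order optimization over the unit
# sphere. The paper's actual objective is not given in the abstract;
# we use the l4-norm maximization surrogate from related sparse
# recovery work purely to show the projected-gradient mechanics.
import numpy as np

def sphere_ascent(Y, steps=200, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(Y.shape[0])
    q /= np.linalg.norm(q)                 # start on the unit sphere
    for _ in range(steps):
        z = q @ Y
        grad = 4 * Y @ z**3                # gradient of ||q^T Y||_4^4
        grad -= (grad @ q) * q             # project out the radial part
        q += lr * grad
        q /= np.linalg.norm(q)             # retract back to the sphere
    return q

# Toy instance Y = A X with element-wise sparse X; one ascent run finds
# a direction q along which q^T Y is spiky (i.e., sparse).
rng = np.random.default_rng(1)
A = np.linalg.qr(rng.standard_normal((8, 3)))[0]   # full column rank
X = rng.standard_normal((3, 400)) * (rng.random((3, 400)) < 0.1)
q = sphere_ascent(A @ X)
print(np.round(q, 2))
```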
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
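A hedged PyTorch sketch of the style-aware adaptation idea, in which a style code modulates a feed-forward layer (the exact adaptation scheme in StyleTalk may differ; names and sizes are ours):

```python
# A hedged PyTorch sketch of a style-aware feed-forward layer: the
# style code predicts per-channel gains that modulate the layer's
# hidden activations. The exact scheme in StyleTalk may differ.
import torch
import torch.nn as nn

class StyleAwareFeedForward(nn.Module):
    def __init__(self, dim: int = 256, hidden: int = 1024, style_dim: int = 128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        # Maps the style code to multiplicative gains on hidden units.
        self.to_scale = nn.Linear(style_dim, hidden)

    def forward(self, x: torch.Tensor, style: torch.Tensor):
        # x: (batch, seq, dim); style: (batch, style_dim)
        scale = torch.sigmoid(self.to_scale(style)).unsqueeze(1)  # (batch, 1, hidden)
        h = torch.relu(self.fc1(x)) * scale   # style modulates the FFN
        return self.fc2(h)

ffn = StyleAwareFeedForward()
out = ffn(torch.randn(2, 10, 256), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 10, 256])
```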
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization aims to concisely summarize the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper we propose a Unified Timeline Summarizer (UTS) that can generate both abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we extract the event-level attention in its generation process, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also assist extractive summarization, where the extracted summary likewise follows the time order. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
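As a toy illustration of relating events by content dependency, the sketch below builds an event graph from a simple lexical-overlap heuristic; UTS itself uses learned representations, so everything here is our own stand-in:

```python
# A toy illustration of building an event graph from content overlap.
# The events, similarity heuristic, and threshold are our assumptions;
# UTS learns these relations rather than hand-coding them.
from itertools import combinations

events = [
    "Company X announces merger talks.",
    "Regulators open a review of the X merger.",
    "Company X merger approved.",
]

def overlap(a: str, b: str) -> float:
    """Jaccard similarity over lower-cased, punctuation-stripped tokens."""
    wa = set(a.lower().replace(".", "").split())
    wb = set(b.lower().replace(".", "").split())
    return len(wa & wb) / len(wa | wb)

# Connect event pairs whose similarity clears a threshold; the graph is
# then fed to a graph encoder to learn a global representation per event.
edges = [(i, j) for i, j in combinations(range(len(events)), 2)
         if overlap(events[i], events[j]) > 0.15]
print(edges)  # [(0, 1), (0, 2), (1, 2)]
```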
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet has been criticized for its learning inefficiency. We attribute this to the insufficient utilization of training signals. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjointness constraint, raising the number of tokens used for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture that predicts invisible (masked) and visible (unmasked) tokens respectively, with superior learning targets. Rooted in orthogonal perspectives on training-efficiency improvement, DM and JD cooperatively accelerate training convergence without sacrificing model generalization. Concretely, DM can train ViT with half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks like semantic segmentation and object detection, DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
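The disjoint masking step is easy to make precise; below is a numpy sketch under our own parameter choices (e.g., two views at a 50% masking rate, so the views can be disjoint):

```python
# A numpy sketch of disjoint masking (DM): sample several masked views
# of one image so that masked token sets are disjoint across views,
# while each view keeps the same masking rate. Parameters are ours.
import numpy as np

def disjoint_masks(num_tokens=196, mask_ratio=0.5, num_views=2, seed=0):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_tokens)         # one shuffle, then split
    per_view = int(num_tokens * mask_ratio)
    assert num_views * per_view <= num_tokens, "views must fit disjointly"
    masks = np.zeros((num_views, num_tokens), dtype=bool)
    for v in range(num_views):
        masks[v, perm[v * per_view:(v + 1) * per_view]] = True
    return masks

masks = disjoint_masks()
# Each view masks 50% of tokens and no token is masked in both views,
# so more tokens per image are used as reconstruction targets.
print(masks.sum(axis=1), np.any(masks.all(axis=0)))  # [98 98] False
```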